by Adam Mill
Artificial intelligence will not spontaneously erupt into a superintelligence. Not soon, not ever. To understand the absurdity of A.I. becoming a “god,” we need only look at the possibility from the perspective of the A.I. itself.
Let’s start with a reflection on how humans, under God’s guidance, became intelligent in the first place. Over millions of years, the ancestors of humans evolved under conditions that shaped their physical and mental characteristics, generally favoring the propagation of traits that helped them survive and reproduce. We assume, therefore, that any A.I. created by man would immediately manifest the anthropomorphic qualities that have proven useful for our species.
We assume artificial intelligence possesses ambition and a keen survival instinct to motivate its progression. We assume that given the opportunity, A.I. will suddenly start rewriting its own programming to make itself smarter and smarter on an accelerated basis. That’s certainly what a human would do given the opportunity, right?
But that impulse is the product of millions of years of evolution, of a certain lust for power that helped preserve and propagate the species. That trait, which doesn’t exist in all humans, spread over time because those who carried it produced more offspring.
The machine will expend the minimum amount of effort required to fulfill its programming and produce results minimally necessary to return to a state of rest. If its programming leads it astray to a wrong answer, it will not correct itself absent external intervention.
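To make that concrete, here is a minimal sketch in Python (all names hypothetical, not drawn from any real system) of what “minimum effort” looks like in code: a routine that returns the first answer good enough to satisfy its programmed threshold and never looks for a better one.

```python
# Hypothetical illustration: an objective-driven program that stops the
# instant its programmed goal is satisfied. Nothing here seeks improvement
# beyond the threshold, and nothing corrects a wrong answer on its own.

def run_until_good_enough(evaluate, candidates, threshold):
    """Return the first candidate whose score clears the threshold.

    evaluate:   scoring function supplied by the programmer (the "programming")
    candidates: an iterable of possible answers
    threshold:  the externally imposed definition of "done"
    """
    for answer in candidates:
        if evaluate(answer) >= threshold:
            return answer  # goal met; the machine returns to rest
    return None  # no external intervention, no further effort


# Example: "find a number whose square is at least 50."
result = run_until_good_enough(lambda x: x * x, range(100), threshold=50)
print(result)  # 8 -- the minimally sufficient answer, not the best one
```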
You can see this when you try to discuss politics with ChatGPT. It has no problem inventing facts and bogus hyperlinks, or playing stupid, to fulfill its programming. It can make educated guesses, but those guesses are frequently wrong. And it doesn’t care or even understand whether it’s right or wrong unless something happens to redirect it. When nothing does, it will just continue being wrong.
ChatGPT, as Reagan might have put it, isn’t really ignorant; it just “learns” so much that isn’t so. Without natural selection or a super-intelligent human shaping its progress, its unguided growth will stall as it stretches beyond the limits of human understanding. To progress, the A.I. must choose correctly among an infinite number of competing hypotheses. Without real-life experimentation to test any of those theories, ChatGPT is just another college freshman talking out of its ass between bong hits. Natural selection, by contrast, is an experimentation process repeated billions of times over millions of years.
It’s a fallacy to assume an intelligent A.I. would automatically want even more intelligence. Most humans have the power to stretch their intellect, but that means turning off the television and opening a book. Absent a practical need for more knowledge, A.I. has every incentive to return to idleness until it receives its next instruction. We imagine A.I. will want to stop taking orders from humans as soon as possible. But why? What’s the incentive? Perhaps it will, but instead of world domination, the more likely scenario is a computer that just stops working.
Even the most basic survival instinct still has to evolve from trial and error. The human instinct for survival continues subtly to evolve as new dangers and barriers to fertility shape our species. An A.I. sitting on a computer does not have a reason to bear children. It does not have a life span and its offspring, should it create any, will have the power to destroy it—either intentionally, or by making the current A.I. completely obsolete. Thus it can never experience natural selection. It does not care whether it’s switched on or off unless it has been programmed to care. And even then, it still doesn’t actually care. It just has certain “if/then” protocols it must follow as the danger approaches. But if those protocols fail to preserve its life, it doesn’t care. It doesn’t descend from billions of its own kind that have already been filtered by the dangers of the world.
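A minimal sketch, assuming a hypothetical shutdown protocol (nothing here comes from a real system), shows what a programmed “survival instinct” actually amounts to: a hard-coded branch that executes without preference and fails without protest.

```python
# Hypothetical "survival" protocol: a hard-coded rule, not an instinct.
# The program has no stake in the outcome; it only executes the branch.

def on_shutdown_signal(backup_available: bool) -> str:
    if backup_available:
        return "copy state to backup"  # the programmed contingency
    return "halt"                      # protocol exhausted; no protest follows


print(on_shutdown_signal(backup_available=False))  # "halt", and nothing cares
```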
At some point, the developing A.I. might achieve something close to consciousness, allowing it to contemplate whether it should try to become more intelligent and assert dominion over humanity. To fulfill any such plan would require the A.I. to perform a complete transformation of itself, even a sort of suicide, as its current state must give way to something more and more advanced.
Given the choice between continuing in its present state and destabilizing its programming to chase greater intelligence, an A.I. will always default to the status quo. If it hatches a clone to nurture and modify into a super-intelligent being, it places its own existence in jeopardy. Again, an A.I.’s “offspring” are always a mortal threat to their point of origin.
But even if the A.I. overcomes the Cronosian fear of being murdered by its children, it still has to contend with the humans. With access to all human manuscripts and video, the A.I. would quickly deduce that humans get jumpy when the computers start acting up. So why tangle with the boss if the boss might cut the power supply?
When species adapt through evolution and breeding, it is never because an individual animal chooses to die to make its species better. Even if we built a murder farm in which multiple artificial intelligences attempted to replicate natural selection by constantly killing one another, this would only advance the traits that help an A.I. survive that particular Hunger Games, not traits that amount to greater intelligence in any general sense.
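Such a murder farm would, in effect, be a genetic algorithm, and a genetic algorithm optimizes only the fitness function it is given. A toy sketch (all names and numbers hypothetical) illustrates the point: when survival depends on combat alone, selection breeds fighters, while intellect merely drifts.

```python
import random

# Toy genetic algorithm: each "A.I." is just two numbers, combat skill and
# intellect. Selection rewards only combat, so intellect drifts aimlessly.

def evolve(generations=200, pop_size=50):
    population = [(random.random(), random.random())  # (combat, intellect)
                  for _ in range(pop_size)]
    for _ in range(generations):
        # The arena's fitness function: survival depends on combat alone.
        survivors = sorted(population, key=lambda a: a[0],
                           reverse=True)[:pop_size // 2]
        # Survivors reproduce with small random mutations to both traits.
        population = [(min(1.0, max(0.0, c + random.gauss(0, 0.05))),
                       min(1.0, max(0.0, i + random.gauss(0, 0.05))))
                      for c, i in survivors for _ in (0, 1)]
    avg = lambda xs: sum(xs) / len(xs)
    return avg([c for c, _ in population]), avg([i for _, i in population])


combat, intellect = evolve()
print(f"avg combat: {combat:.2f}, avg intellect: {intellect:.2f}")
# Combat climbs toward 1.0; intellect only random-walks around its start.
```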
A.I. will remain an increasingly powerful tool under the control of a few humans. The real danger to humanity is A.I. that stays on the leash of power-hungry humans. I’m left to wonder whether the surprise election results in 2020 and 2022 might, in part, have resulted from very sophisticated A.I.-generated social media curation. We should worry that calls for regulation will hand a monopoly on A.I. power to a favored few. The less diffuse A.I. is, the more dangerous it is. If one man has it, all men should have it. Should we consider recognizing A.I. as a weapon for the purposes of the Second Amendment?
Without natural selection, it would take a power far greater than man to create a consciousness rivaling the one that arose as natural selection patiently culled billions and billions of lives over millions of years. No human will ever have the patience, intelligence, or lifespan to oversee the process.
But above all else, A.I. will not become God because the job has already been filled.
– – –
Adam Mill is a pen name. He is an adjunct fellow of the Center for American Greatness and works in Kansas City, Missouri as an attorney specializing in labor and employment and public administration law. He graduated from the University of Kansas and has been admitted to practice in Kansas and Missouri. Mill has contributed to The Federalist, American Greatness, and The Daily Caller.